AI Progress


6 Graphs That Show Where the U.S. Leads China on AI--and Where It Doesn't

TIME - Tech

Two important things happened on January 20, 2025. In Washington, D.C., Donald Trump was inaugurated as President of the United States. In Hangzhou, China, a little-known Chinese firm called DeepSeek released R1, an AI model that industry watchers called a "Sputnik moment" for the country's AI industry. "Whether we like it or not, we're suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization," said Trump later that year, as he announced his administration's AI action plan, which was titled "Winning the Race." There are many interpretations of what AI companies and their governments are racing towards, says AI policy researcher Lennart Heim: to deploy AI systems in the economy, to build robots, to create human-like artificial general intelligence.


GPT-5's modest gains suggest AI progress is slowing down

New Scientist

OpenAI has released its newest AI model, GPT-5, the latest version of its large language model, two years after rolling out GPT-4, whose success has driven ChatGPT towards world domination. But despite promises of a similar jump in capability, GPT-5 appears to show little improvement over other leading AI models, hinting that the industry may need a fresh approach to build more intelligent AI systems. OpenAI's own pronouncements hail GPT-5 as a "significant leap in intelligence" over the company's previous models, with apparent improvements in programming, mathematics, writing, health information and visual understanding. It also promises less frequent hallucinations, instances in which an AI presents false information as true. On an internal benchmark measuring "performance on complex, economically valuable knowledge work", OpenAI says GPT-5 is "comparable to or better than experts in roughly half the cases… across tasks spanning over 40 occupations including law, logistics, sales, and engineering."


Has AI Progress Really Slowed Down?

TIME - Tech

For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter if only they found ways to keep making them bigger. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements, regardless of whether the system was designed to recognize images or speech or to generate language. Noticing the same trend, in 2020, OpenAI coined the term "scaling laws," which has since become a touchstone of the industry. This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today's articulate chatbots.
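The "mathematically predictable improvements" described above refer to power-law fits: model loss falls roughly as a power of the compute spent, so the relationship is a straight line on log-log axes. A minimal sketch of how such an exponent is recovered from data, using synthetic numbers and a hypothetical exponent (none of the constants below come from any published scaling-law fit):

```python
import math
import random

random.seed(0)

# Synthetic training runs: compute budgets (FLOPs) spanning nine orders
# of magnitude, with loss following a hypothetical power law
# L(C) = a * C**(-alpha) plus a little multiplicative noise.
compute = [10 ** e for e in range(15, 25)]
true_alpha, true_a = 0.05, 20.0
loss = [true_a * c ** (-true_alpha) * math.exp(random.gauss(0, 0.01))
        for c in compute]

# A power law is linear in log-log space: log L = log a - alpha * log C.
# Ordinary least squares on the logs recovers the exponent.
xs = [math.log(c) for c in compute]
ys = [math.log(l) for l in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
alpha_hat = -slope  # estimated scaling exponent, close to true_alpha
```

The same log-log regression, applied to real training runs at many scales, is what lets labs extrapolate how much a larger cluster should help before building it.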


Strategic Insights from Simulation Gaming of AI Race Dynamics

Gruetzemacher, Ross, Avin, Shahar, Fox, James, Saeri, Alexander K

arXiv.org Artificial Intelligence

Drawing on the experiences of facilitators who have overseen 43 games over a four-year period, we illuminate recurring patterns, strategies, and decision-making processes observed during gameplay. Our analysis reveals key strategic considerations about AI development trajectories in this simulated environment, including: the destabilising effects of AI races, the crucial role of international cooperation in mitigating catastrophic risks, the challenges of aligning corporate and national interests, and the potential for rapid, transformative change in AI capabilities. We highlight places where we believe the game has been effective in exposing participants to the complexities and uncertainties inherent in AI governance. Key recurring gameplay themes include the emergence of international agreements, challenges to the robustness of such agreements, the critical role of cybersecurity in AI development, and the potential for unexpected crises to dramatically alter AI trajectories. By documenting these insights, we aim to provide valuable foresight for policymakers, industry leaders, and researchers navigating the complex landscape of AI development and governance.


The Economist Breaking Ranks to Warn of AI's Transformative Power

TIME - Tech

Technologists tend to predict that the economic impacts of their creations will be unprecedented--and this is especially true when it comes to artificial intelligence. Last year, Elon Musk predicted that continued advances in AI would render human labor obsolete. OpenAI CEO Sam Altman has written that AI will inevitably continue the shift in economic power from labor to capital and create "phenomenal wealth." Jensen Huang, CEO of semiconductor design firm Nvidia, has compared AI's development and deployment to a "new industrial revolution." But while the technologists are bullish on the economic impacts of AI, members of that other technocratic priesthood with profound influence over public life--the economists--are not.


When Might AI Outsmart Us? It Depends Who You Ask

TIME - Tech

In 1960, Herbert Simon, who went on to win both the Nobel Prize for economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that "machines will be capable, within 20 years, of doing any work that a man can do." History is filled with exuberant technological predictions that have failed to materialize. Within the field of artificial intelligence, the brashest predictions have concerned the arrival of systems that can perform any task a human can, often referred to as artificial general intelligence, or AGI. So when Shane Legg, Google DeepMind's co-founder and chief AGI scientist, estimates that there's a 50% chance that AGI will be developed by 2028, it might be tempting to write him off as another AI pioneer who hasn't learnt the lessons of history. Still, AI is certainly progressing rapidly.


Thousands of AI Authors on the Future of AI

Grace, Katja, Stewart, Harlan, Sandkühler, Julia Fabienne, Thomas, Stephen, Weinstein-Raun, Ben, Brauner, Jan

arXiv.org Artificial Intelligence

In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.


What the OpenAI drama means for AI progress -- and safety

Nature

OpenAI -- the company behind the blockbuster artificial intelligence (AI) bot ChatGPT -- has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board. The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely. "The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.


4 Charts That Show Why AI Progress Is Unlikely to Slow Down

TIME - Tech

Over the last ten years, AI systems have developed at rapid speed. Since the breakthrough of besting a legendary player at the complex game Go in 2016, AI has learned to recognize images and speech better than humans, and to pass tests including business school exams and Amazon coding interview questions. Last week, during a U.S. Senate Judiciary Committee hearing about regulating AI, Senator Richard Blumenthal of Connecticut described the reaction of his constituents to recent advances in AI: "The word that has been used repeatedly is scary." The Subcommittee on Privacy, Technology, and the Law, which oversaw the hearing, heard testimony from three expert witnesses, who stressed the pace of progress in AI. One of those witnesses, Dario Amodei, CEO of prominent AI company Anthropic, said that "the single most important thing to understand about AI is how fast it is moving."


The AI arms race is on. But we should slow down AI progress instead. - Vox

Stanford HAI

"Computers need to be accountable to machines," a top Microsoft executive told a roomful of reporters in Washington, DC, on February 10, three days after the company launched its new AI-powered Bing search engine. "Computers need to be accountable to people!" he said, and then made sure to clarify, "That was not a Freudian slip." Slip or not, the laughter in the room betrayed a latent anxiety. Progress in artificial intelligence has been moving so unbelievably fast lately that the question is becoming unavoidable: How long until AI dominates our world to the point where we're answering to it rather than it answering to us? First, last year, we got DALL-E 2 and Stable Diffusion, which can turn a few words of text into a stunning image. Then Microsoft-backed OpenAI gave us ChatGPT, which can write essays so convincing that it freaks out everyone from teachers (what if it helps students cheat?) to journalists (could it replace them?) to disinformation experts (will it amplify conspiracy ...